Open-Set Adversarial Defense with Clean-Adversarial Mutual Learning
Authors
Abstract
Open-set recognition and adversarial defense study two key aspects of deep learning that are vital for real-world deployment. The objective of open-set recognition is to identify samples from open-set classes during testing, while adversarial defense aims to robustify the network against images perturbed by imperceptible adversarial noise. This paper demonstrates that open-set recognition systems are vulnerable to adversarial samples. Furthermore, it shows that adversarial defense mechanisms trained on known classes are unable to generalize well to open-set samples. Motivated by these observations, we emphasize the necessity of an Open-Set Adversarial Defense (OSAD) mechanism. This paper proposes an Open-Set Defense Network with Clean-Adversarial Mutual Learning (OSDN-CAML) as a solution to the OSAD problem. The proposed network designs an encoder with dual-attentive feature-denoising layers coupled with a classifier to learn a noise-free latent feature representation, which adaptively removes adversarial noise guided by channel and spatial-wise attentive filters. Several techniques are exploited to learn an informative latent feature space with the aim of improving the performance of open-set recognition. First, we incorporate a decoder to ensure that clean images can be reconstructed from the obtained latent features. Then, self-supervision is used to ensure that the latent features are informative enough to carry out an auxiliary task. Finally, to exploit more complementary knowledge from clean image classification to facilitate feature denoising and to search for a more generalized local minimum for open-set recognition, we further propose clean-adversarial mutual learning, where a peer network (classifying clean images) is introduced to learn mutually with the classifier (classifying adversarial images). We propose a testing protocol to evaluate OSAD performance and show the effectiveness of the proposed method against white-box attacks, black-box attacks, and rectangular occlusion attacks on multiple object classification datasets.
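For concreteness, the sketch below outlines how the training signals described in the abstract could be combined: a dual-attentive denoising encoder, a classifier on adversarial images, a decoder reconstructing the clean image, a self-supervised auxiliary head, and a peer network on clean images coupled through a KL term. Module shapes, loss weighting, the rotation-prediction auxiliary task, and the cls_head/ssl_head/dec/peer interfaces are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the OSDN-CAML training signals described in the
# abstract. Shapes, loss weights and the auxiliary task are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentiveDenoise(nn.Module):
    """Feature denoising guided by channel- and spatial-wise attention."""
    def __init__(self, channels):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        c = self.channel_fc(x.mean(dim=(2, 3))).unsqueeze(-1).unsqueeze(-1)
        s = self.spatial_conv(x)
        return x * c * s  # attentively suppress noisy feature responses

class Encoder(nn.Module):
    """Encoder with denoising layers interleaved between convolutions."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 64, 3, 2, 1), nn.ReLU())
        self.denoise1 = DualAttentiveDenoise(64)
        self.conv2 = nn.Sequential(nn.Conv2d(64, feat_dim, 3, 2, 1), nn.ReLU())
        self.denoise2 = DualAttentiveDenoise(feat_dim)

    def forward(self, x):
        return self.denoise2(self.conv2(self.denoise1(self.conv1(x))))

def osdn_caml_losses(enc, dec, cls_head, ssl_head, peer, x_clean, x_adv, y, y_rot):
    """One step's losses: adversarial classification, clean-image reconstruction,
    self-supervised rotation prediction, and clean-adversarial mutual learning
    (KL between the peer's clean-image predictions and the defender's)."""
    feat_adv = enc(x_adv)
    logits_adv = cls_head(feat_adv.mean(dim=(2, 3)))
    loss_cls = F.cross_entropy(logits_adv, y)
    loss_rec = F.mse_loss(dec(feat_adv), x_clean)   # reconstruct the clean image
    loss_ssl = F.cross_entropy(ssl_head(feat_adv.mean(dim=(2, 3))), y_rot)
    logits_clean = peer(x_clean)                    # peer classifies clean images
    loss_mutual = F.kl_div(F.log_softmax(logits_adv, dim=1),
                           F.softmax(logits_clean, dim=1).detach(),
                           reduction="batchmean")
    return loss_cls + loss_rec + loss_ssl + loss_mutual
```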
Similar References
Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN
We propose a novel technique to make a neural network robust to adversarial examples using a generative adversarial network. We alternately train both the classifier and generator networks. The generator network generates an adversarial perturbation that can easily fool the classifier network by using the gradient of each image. Simultaneously, the classifier network is trained to classify correctly bo...
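A minimal sketch of the alternating scheme described above, assuming the generator consumes the classifier's input gradient and outputs a bounded perturbation; the network definitions and the budget eps are illustrative, not the paper's implementation.

```python
# Alternating generator/classifier training against gradient-derived perturbations.
import torch
import torch.nn.functional as F

def train_step(classifier, generator, opt_c, opt_g, x, y, eps=8 / 255):
    # 1) Input gradient of the classification loss, used as the generator's cue.
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(classifier(x), y), x)[0]

    # 2) Generator step: produce a bounded perturbation that fools the classifier.
    delta = eps * torch.tanh(generator(grad.detach()))
    loss_g = -F.cross_entropy(classifier(x.detach() + delta), y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # 3) Classifier step: classify both clean and perturbed images correctly.
    loss_c = (F.cross_entropy(classifier(x.detach()), y) +
              F.cross_entropy(classifier(x.detach() + delta.detach()), y))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
```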
Adversarial Examples Generation and Defense Based on Generative Adversarial Network
We propose a novel generative adversarial network to generate and defend against adversarial examples for deep neural networks (DNNs). The adversarial stability of a network D is improved by training alternately with an additional network G. Our experiment is carried out on MNIST, and the adversarial examples are generated in an efficient way compared with widely-used gradient-based methods. After tra...
Defense against Universal Adversarial Perturbations
Recent advances in Deep Learning show the existence of image-agnostic quasi-imperceptible perturbations that when applied to ‘any’ image can fool a state-of-the-art network classifier to change its prediction about the image label. These ‘Universal Adversarial Perturbations’ pose a serious threat to the success of Deep Learning in practice. We present the first dedicated framework to effectivel...
Adversarial and Clean Data Are Not Twins
Adversarial attack has cast a shadow on the massive success of deep neural networks. Despite being almost visually identical to the clean data, the adversarial images can fool deep neural networks into wrong predictions with very high confidence. In this paper, however, we show that we can build a simple binary classifier separating the adversarial apart from the clean data with accuracy over 9...
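A minimal sketch of this detection idea, assuming FGSM as the attack used to produce the adversarial half of the detector's training data; the detector architecture is left abstract and the attack choice is an assumption for illustration.

```python
# Train a binary classifier to separate adversarial images from clean ones.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: one-step perturbation of x."""
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    return (x + eps * grad.sign()).detach().clamp(0, 1)

def detector_step(detector, opt, target_model, x, y):
    """One step of the clean-vs-adversarial detector (label 0 = clean, 1 = adversarial)."""
    x_adv = fgsm(target_model, x, y)
    inputs = torch.cat([x, x_adv], dim=0)
    labels = torch.cat([torch.zeros(len(x)), torch.ones(len(x))]).long().to(x.device)
    loss = F.cross_entropy(detector(inputs), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```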
Online Learning with Adversarial Delays
We study the performance of standard online learning algorithms when the feedback is delayed by an adversary. We show that online-gradient-descent [1] and follow-the-perturbed-leader [2] achieve regret O(√D) in the delayed setting, where D is the sum of delays of each round's feedback. This bound collapses to an optimal O(√T) bound in the usual setting of no delays (where D = T). Our main...
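Restating the quoted bound; the delay notation d_t is introduced here only for clarity and is not part of the snippet.

```latex
% Regret bound with adversarially delayed feedback, as quoted above.
\[
  \mathrm{Regret}(T) = O\!\left(\sqrt{D}\right),
  \qquad D = \sum_{t=1}^{T} d_t,
\]
% where d_t denotes the delay of round t's feedback. In the usual
% undelayed setting each d_t = 1, so D = T and the bound collapses
% to the optimal O(\sqrt{T}).
```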
Journal
Journal title: International Journal of Computer Vision
Year: 2022
ISSN: 0920-5691, 1573-1405
DOI: https://doi.org/10.1007/s11263-022-01581-0